Iterative Concept Learning from Noisy Data

Author

  • Gunter Grieser
Abstract

In the present paper, we study iterative learning of indexable concept classes from noisy data. We distinguish between learning from positive data only and learning from positive and negative data; synonymously, learning from text and informant, respectively. Following [20], a noisy text (a noisy informant) for some target concept contains every correct data item infinitely often, while in addition some incorrect data is presented. In the text case, incorrect data is presented only finitely many times while, in the informant case, incorrect data can occur infinitely often. An iterative learner successively takes as input one element of an information sequence about a target concept as well as its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the target concept. In contrast to an unconstrained learning device, an iterative learner has only limited access to the input data provided. We present characterizations of iterative learning from noisy data in terms independent of learning theory; the relevant conditions are purely structural ones. Furthermore, iterative learning from noisy data is compared with standard learning models such as learning in the limit, conservative inference, and finite learning. Surprisingly, where learning from noisy data is concerned, iterative learners are exactly as powerful as unconstrained learning devices. This nicely contrasts with the noise-free case, where iterative learners are strictly less powerful. Moreover, the exact location of iterative learning from noisy data within the hierarchy of the other models of learning indexable concept classes is established. Additionally, we study iterative learning from redundant noise-free data. It turns out that iterative learners are able to exploit redundancy in the input data to overcome, to a certain extent, the limitations in their accessibility.
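The iterative-learning protocol described above can be sketched as a simple loop: at each stage the learner receives only its previous hypothesis and the current data item, with no access to earlier items. The following is a minimal, illustrative sketch, not the paper's formalism; the function names and the toy concept class (initial segments {0,...,n} of the naturals, learned from noise-free text by tracking the maximum element seen) are assumptions chosen to make the restricted memory access concrete.

```python
def run_iterative_learning(update, h0, stream):
    """Drive an iterative learner over an information sequence.

    `update` sees only the previous hypothesis and the current item --
    this is the limited data access that distinguishes iterative
    learners from unconstrained ones.
    """
    h = h0
    for item in stream:
        h = update(h, item)  # no access to items seen earlier
    return h

def update(h, item):
    # Toy learner for the class { {0,...,n} : n >= 0 }: the hypothesis
    # is (the index of) the largest element observed so far.
    return max(h, item)

# A (noise-free) text for the target concept {0,...,5}; repetitions
# are allowed, and every element eventually appears.
final = run_iterative_learning(update, 0, [1, 3, 2, 5, 3, 5, 5])
```

Here the hypothesis sequence stabilizes on 5 once the largest element has appeared; under noise, as the abstract notes, convergence arguments must instead rely on correct items occurring infinitely often.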


Related papers

On the power of incremental learning

This paper provides a systematic study of incremental learning from noise-free and from noisy data. As usual, we distinguish between learning from positive data and learning from positive and negative data, synonymously called learning from text and learning from informant. Our study relies on the notion of noisy data introduced by Stephan. The basic scenario, named iterative learning, is as fo...


On the strength of incremental learning

This paper provides a systematic study of incremental learning from noise-free and from noisy data, thereby distinguishing between learning from only positive data and from both positive and negative data. Our study relies on the notion of noisy data introduced in [22]. The basic scenario, named iterative learning, is as follows. In every learning stage, an algorithmic learner takes as input one...


Optimization of Concept Prototypes for the Recognition of Noisy Texture Data

This paper presents an approach to the recognition of noisy vision concepts incorporating machine learning and concept optimization techniques. Noisy texture characteristics require the development of techniques that can improve the performance of a texture recognition system. We develop such techniques through the optimization of learned concept prototypes in order to remove noisy and less sig...


Weighing Hypotheses: Incremental Learning from Noisy Data

Incremental learning from noisy data presents dual challenges: that of evaluating multiple hypotheses incrementally and that of distinguishing errors due to noise from errors due to faulty hypotheses. This problem is critical in such areas of machine learning as concept learning, inductive programming, and sequence prediction. I develop a general, quantitative method for weighing the merits of ...


Iterative Learning with Open-set Noisy Labels

Large-scale datasets possessing clean label annotations are crucial for training Convolutional Neural Networks (CNNs). However, labeling large-scale data can be very costly and error-prone, and even high-quality datasets are likely to contain noisy (incorrect) labels. Existing works usually employ a closed-set assumption, whereby the samples associated with noisy labels possess a true class con...



Journal:

Volume   Issue

Pages  -

Publication date: 1999